✦ AI-powered · Free to use · Built by students

Card sorting & tree testing,
without the price tag

TreeTest AI is a free, AI-assisted research pipeline for information architecture. Run card sorts, generate site maps, and validate navigation with tree tests — all in one tool.

One pipeline, end to end
From raw content items to a validated navigation structure — AI handles the heavy lifting so you can focus on the research.
Card Sort
Upload your content items and run open, closed, or hybrid card sorts. Share a link — participants sort in their browser, no software needed.
AI Site Map
AI clusters your card sort results into a navigation structure using participant language. Review, edit, and approve before testing.
Tree Test
AI writes realistic tasks with verified paths. Collect unmoderated responses, then get a full dashboard with success rates, SEQ scores, and path analysis.
Rich Dashboards
Similarity matrices, first-click heatmaps, confusion tables, dendrograms, and more — all the visualizations you'd expect from enterprise tools.
AI Insights
Get prioritised recommendations, problem areas, and strengths — AI reads your data so you can walk into stakeholder meetings with a clear story.
Iterate & Improve
Apply AI suggestions, re-run tree tests, and track improvement across rounds. The pipeline loops until your navigation is solid.
Built by students, for researchers who can't afford $300/month tools

We're UX students who ran into the same wall every semester: industry-standard user testing tools are prohibitively expensive. Optimal Workshop, Maze, UserTesting — they're built for enterprise budgets, not student projects or indie teams.

So we built our own. TreeTest AI uses AI to replace the manual overhead that makes these tools expensive — generating tasks, clustering card sort data, writing analysis, validating paths. The result is a tool that matches a $3,000/year subscription, for free, running entirely in your browser with your own AI API key.

This isn't a watered-down demo. It's the same tool we use for our own research projects — complete with unmoderated testing, real-time response collection, and publication-ready dashboards. We believe access to good research tools shouldn't depend on your budget.

Built by
New Study
Step 1 of 7

Study details

Give your study a name, describe what you're testing, and optionally add a research plan so AI can write better tasks.

Study name
Study description (helps AI write better tasks)
Research plan (optional — research questions, user goals, hypotheses; AI uses this for task generation)
Card Sort Setup

Define your card sort

Choose how participants will categorise items, then add the cards they'll be sorting.

Card sort type
Choose how participants will categorise your items.
OP
Open
Participants create their own category names
CL
Closed
You define the categories; participants sort into them
HY
Hybrid
You provide seed categories; participants can rename or add
Define categories
Add the categories participants will sort into. Assign hierarchy levels to define primary vs. secondary groupings.
0 categories added
Content items
These are what participants will sort into groups. Type each item and press Enter.
0 items added
Or import items
Paste a list (one per line or comma-separated), upload a file, or drop a screenshot for AI to extract items from.
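As a rough sketch, pasted input of the kind described above can be split into items along these lines. This is an illustrative helper, not TreeTest AI's actual parser, and the deduplication step is an assumption:

```typescript
// Split pasted text into content items: one item per line,
// or comma-separated when the paste is a single line.
// Dropping duplicates is an assumption, not documented behaviour.
function parseItems(text: string): string[] {
  const parts = text.includes("\n") ? text.split("\n") : text.split(",");
  const items = parts.map((p) => p.trim()).filter((p) => p.length > 0);
  return [...new Set(items)];
}
```

Either way, each entry is trimmed and blanks are discarded, so trailing newlines or stray commas in the paste are harmless.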
Add at least 5 items to continue
Step 2 of 7

Your card sort is ready

AI has built a card sort activity with your items. Share the link — participants will group items in whatever way makes sense to them.

Items to sort
0
Seed categories
Closed — fixed categories
0
Responses so far
Share with participants
We recommend 15–20 responses for reliable clustering. The link stays open until you close it.
Live responses
Collecting
Waiting for first response…
Step 3 of 7

Card sort results

Here's how participants grouped your items. Review the analysis below, then generate your site map.

0
Participants
Agreement score
Avg. completion
✦ AI Insights Card sort analysis
No responses yet. Summary will appear once participants complete the sort.
Problem areas
Strengths
Item confusion table
Items that participants placed in the widest range of categories — most ambiguous first.
Item # Groups Top group 2nd group
Top participant category names
Co-sort heatmap
Items frequently sorted together by the same participants.
Co-sort similarity matrix
% of participants who placed each pair in the same group. White = 0%, deep blue = 100%.
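For reference, the cell values described above — the percentage of participants who placed a given pair of items in the same group — can be computed with a short sketch like this. The types and function names are illustrative, not the tool's actual code:

```typescript
// One participant's sort: item name -> the group label they placed it in.
type Sort = Record<string, string>;

// matrix[i][j] = % of participants who put items[i] and items[j]
// in the same group (rounded to whole percent).
function coSortMatrix(items: string[], sorts: Sort[]): number[][] {
  const n = items.length;
  const matrix = Array.from({ length: n }, () => new Array<number>(n).fill(0));
  if (sorts.length === 0) return matrix; // no responses yet
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      const together = sorts.filter(
        (s) => s[items[i]] !== undefined && s[items[i]] === s[items[j]]
      ).length;
      matrix[i][j] = Math.round((100 * together) / sorts.length);
    }
  }
  return matrix;
}
```

The diagonal is 100% whenever every participant sorted the item, and the colour scale maps each cell directly: white at 0%, deep blue at 100%.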
Step 4 of 7

Proposed site map

AI has clustered your card sort data into a navigation structure. Review and edit any labels, then approve it to generate your tree test.

Import a site map
Paste indented text, upload a CSV, or drop a screenshot.
Indent with 2 spaces (or tabs) per level. JSON tree also accepted.
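A minimal sketch of how that indentation convention could be parsed into a tree — illustrative only, with the assumption that over-indented lines are clamped to the deepest valid level rather than rejected:

```typescript
interface SiteNode {
  label: string;
  children: SiteNode[];
}

// Parse indented text into a site-map tree: one tab or two spaces per level.
function parseSiteMap(text: string): SiteNode[] {
  const roots: SiteNode[] = [];
  const stack: SiteNode[] = []; // stack[d] = most recent node at depth d
  for (const raw of text.split("\n")) {
    if (!raw.trim()) continue; // skip blank lines
    const indent = raw.match(/^[\t ]*/)![0];
    const depth = indent.includes("\t")
      ? indent.length // tabs: one per level
      : Math.floor(indent.length / 2); // spaces: two per level
    const d = Math.min(depth, stack.length); // clamp over-indented lines
    const node: SiteNode = { label: raw.trim(), children: [] };
    if (d === 0) roots.push(node);
    else stack[d - 1].children.push(node);
    stack[d] = node;
    stack.length = d + 1; // anything deeper than d is now stale
  }
  return roots;
}
```

So `Home` / `  Pricing` / `    Plans` yields Plans nested under Pricing under Home, and a following unindented line starts a new top-level branch.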
Site map structure
Click any label to rename. Use "+ child" to nest, × to remove.
Step 5 of 7

Tree test is ready

Every task's path is verified against your site map. Review the tasks below, then share the link with participants.

0
Tasks generated
Branches covered
0
Responses so far
Generated tasks
Share with participants
We recommend at least 15 responses for reliable tree test data.
Step 6 of 7

Tree test results

Here's how well participants found items in your navigation. Tasks below 70% success are worth investigating.

0
Participants
Overall success
Avg. SEQ score / 7
Task success breakdown
Found it directly   Found it (with backtracking)   Didn't find it
Participant responses 0 participants
Direct   ~ Indirect   Fail   Skip
PID Timestamp T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 Score
First-click matrix
Heatmap of where participants first clicked per task. Darker = more clicks. ✓ = correct category.
SEQ scores by task
Mean perceived ease per task (1–7). Dashed line = 5.5 benchmark.
Success × Time matrix
Each dot = one task. Top-left = slow & unsuccessful. Bottom-right = fast & successful.
Time on task
Box = Q1–Q3, centre line = median, whiskers = min/max.
Navigation paths
Tree structure showing paths participants navigated. Thicker lines = more traffic. Red = incorrect path. Green = correct path.
✦ Insights Tree test analysis
Overall assessment
Waiting for responses…
Problem areas
Strengths
Prioritised recommendations
Step 7 of 7 · Iteration 1

AI's suggested improvements

Based on the tree test data, here's what AI recommends changing. Apply what you agree with, then run another round to confirm the improvements.

1
Refinement round · waiting for results
Updated site map
Changes pending
Export study data
Download raw data from each phase for further analysis.
AI Settings
AI Provider
API Key
Enter an API key to test the connection.
Your key is stored only in this browser. It is never sent to any server other than your chosen AI provider.
Working…
This takes about 10–15 seconds
Import site map from CSV
Review the file and choose how to parse it.
file.csv
File preview (first rows)
Generate tree test tasks
AI will read your site map, study description, and research plan to write realistic scenarios.
Number of tasks
10 is the standard. Each task targets a specific node in your site map — you can edit, add, or remove tasks after generation.